Results 1 - 4 of 4
1.
J Speech Lang Hear Res ; 67(5): 1339-1359, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38535722

ABSTRACT

PURPOSE: We explore a new approach to the study of cognitive effort involved in listening to speech by measuring the brain activity in a listener in relation to the brain activity in a speaker. We hypothesize that the strength of this brain-to-brain synchrony (coupling) reflects the magnitude of cognitive effort involved in verbal communication and includes both listening effort and speaking effort. We investigate whether interbrain synchrony is greater in native-to-native versus native-to-nonnative communication using functional near-infrared spectroscopy (fNIRS). METHOD: Two speakers participated, a native speaker of American English and a native speaker of Korean who spoke English as a second language. Each speaker was fitted with the fNIRS cap and told short stories. The native English speaker provided the English narratives, and the Korean speaker provided both the nonnative (accented) English and Korean narratives. In separate sessions, fNIRS data were obtained from seven English monolingual participants ages 20-24 years who listened to each speaker's stories. After listening to each story in native and nonnative English, they retold the content, and their transcripts and audio recordings were analyzed for comprehension and discourse fluency, measured as the number of hesitations and the articulation rate. No story retellings were obtained for narratives in Korean (an incomprehensible language for English listeners). Utilizing an fNIRS technique termed sequential scanning, we quantified the brain-to-brain synchronization in each speaker-listener dyad. RESULTS: For native-to-native dyads, multiple brain regions associated with various linguistic and executive functions were activated. There was a weaker coupling for native-to-nonnative dyads, and only the brain regions associated with higher order cognitive processes and functions were synchronized.
All listeners understood the content of all stories, but they hesitated significantly more when retelling stories told in accented English. The nonnative speaker hesitated significantly more often than the native speaker and had a significantly slower articulation rate. There was no brain-to-brain coupling during listening to Korean, indicating a break in communication when listeners failed to comprehend the speaker. CONCLUSIONS: We found that effortful speech processing decreased interbrain synchrony and delayed comprehension processes. The obtained brain-based and behavioral patterns are consistent with our proposal that cognitive effort in verbal communication pertains to both the listener and the speaker and that brain-to-brain synchrony can be an indicator of differences in their cumulative communicative effort. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.25452142.
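The abstract does not state which coupling metric the sequential-scanning analysis used. As a minimal illustration only, brain-to-brain synchrony between a speaker's and a listener's fNIRS recordings could be approximated as a channel-wise Pearson correlation (all array shapes and the metric itself are assumptions, not the study's method):

```python
import numpy as np

def interbrain_coupling(speaker, listener):
    """Channel-wise Pearson correlation between speaker and
    listener fNIRS time series (shape: channels x samples).
    A simplified stand-in for the coupling analysis described
    in the abstract, not the authors' actual pipeline."""
    assert speaker.shape == listener.shape
    s = speaker - speaker.mean(axis=1, keepdims=True)
    l = listener - listener.mean(axis=1, keepdims=True)
    num = (s * l).sum(axis=1)
    den = np.sqrt((s ** 2).sum(axis=1) * (l ** 2).sum(axis=1))
    return num / den  # one coupling value per channel

# Identical signals couple maximally (values near 1.0 per channel)
rng = np.random.default_rng(0)
sig = rng.standard_normal((4, 100))
coupling = interbrain_coupling(sig, sig)
```

In practice, fNIRS hyperscanning studies often use wavelet coherence rather than raw correlation; this sketch only shows the shape of the computation.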


Subjects
Brain, Cognition, Near-Infrared Spectroscopy, Speech Perception, Humans, Near-Infrared Spectroscopy/methods, Speech Perception/physiology, Male, Young Adult, Female, Brain/physiology, Brain/diagnostic imaging, Pilot Projects, Cognition/physiology, Multilingualism, Speech/physiology, Language, Adult
2.
Dyslexia ; 28(1): 60-78, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34612551

ABSTRACT

Auditory research in developmental dyslexia proposes that deficient auditory processing of speech underlies difficulties with reading and spelling. Focusing predominantly on phonological processing, studies have not yet addressed the role of the speaker-related (indexical) properties of speech that enable the formation of phonological representations. Here, we assess auditory processing of indexical characteristics cueing a speaker's regional dialect and gender to determine whether dyslexia constrains recognition of dialect features and voice gender. Adults and children aged 11-14 years with dyslexia and their age-matched controls responded to 360 unique sentences extracted from spontaneous conversations of 40 speakers. In addition to the original unprocessed speech, there were two filtered conditions (using lowpass filtering at 400 Hz and 8-channel noise vocoding) probing listeners' responses to segmental and prosodic cues. Compared with controls, both groups with dyslexia were significantly limited in their abilities to recognize dialect features from either set of cues. The results for gender suggest that their comparatively worse gender recognition in the noise-vocoded condition was possibly related to poor temporal resolution. We propose that the deficient processing of indexical cues by individuals with dyslexia originates in peripheral auditory processes, of which impaired processing of relevant temporal cues in amplitude envelope is a likely candidate.
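The 400 Hz lowpass condition above preserves prosodic (pitch) information while removing most segmental detail. A crude sketch of such a condition, using an FFT brick-wall filter (the study's actual filter design is not specified in the abstract, and a real stimulus pipeline would use a proper filter with a gradual roll-off):

```python
import numpy as np

def lowpass_400(signal, fs):
    """Zero out all spectral content above 400 Hz via an FFT
    brick-wall filter -- an illustration of the lowpass
    condition, not the study's exact filter."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spec[freqs > 400] = 0
    return np.fft.irfft(spec, n=len(signal))

fs = 16000
t = np.arange(fs) / fs
# A 200 Hz "pitch-like" component survives; 3000 Hz detail is removed
x = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 3000 * t)
y = lowpass_400(x, fs)
```

The companion 8-channel noise-vocoded condition works the opposite way: it keeps each band's amplitude envelope (temporal cues) while replacing spectral fine structure with noise.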


Subjects
Dyslexia, Speech Perception, Adolescent, Adult, Child, Cues (Psychology), Humans, Language, Phonetics, Speech
3.
J Acoust Soc Am ; 150(5): 3711, 2021 11.
Article in English | MEDLINE | ID: mdl-34852578

ABSTRACT

The development of stop consonant voicing in English-speaking children has been documented as a progressive mastery of phonological contrast, but implementation of voicing within one voicing category has not been systematically examined. This study provides a comprehensive account of structured variability in phonetic realization of /b/ in running speech by 8-12-year-old American children (n = 48) when compared to adults (n = 36). The stop always occurred word-initially, was followed by either a voiced or voiceless coda, and its position varied in a sentence, which created systematic conditions to examine acoustic variability in closure duration (CD) and voicing during the closure (VDC) stemming from phonetic context and prosodic prominence. Children demonstrated command of long-distance anticipatory coarticulation, providing evidence that information about coda voicing is distributed over an entire monosyllabic word and is available in the onset stop. They also manifested covariation of cues to stop voicing and command of prosodic variation, despite greater random variability, greater CD, reduced VDC, and exaggerated execution of sentential focus when compared to adults. Controlling for regional variation, dialect was a significant predictor for adults but not for children, who no longer adhered to the marked local variants in their implementation of stop voicing.
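The two acoustic measures above, closure duration (CD) and voicing during the closure (VDC), are both derived from interval annotations of the stop. A hypothetical sketch of how they might be computed from closure boundaries and pitch-tracker voicing decisions (the boundary values, frame step, and helper name are illustrative, not taken from the study):

```python
def closure_measures(closure_on, closure_off, voiced_frames, frame_step=0.005):
    """Compute closure duration (CD) and voicing-during-closure
    (VDC), both in seconds, from hypothetical annotations:
    closure boundaries and the times of analysis frames judged
    voiced (e.g., by a pitch tracker). Illustrative only; the
    study's measurement procedure is not given in the abstract."""
    cd = closure_off - closure_on
    vdc = sum(frame_step for t in voiced_frames
              if closure_on <= t < closure_off)
    return cd, vdc

# A 60 ms closure with 3 voiced frames inside it (one frame falls outside)
cd, vdc = closure_measures(0.10, 0.16, [0.100, 0.105, 0.110, 0.200])
```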


Subjects
Speech Perception, Voice, Adult, Child, Humans, Language, Phonetics, Speech, Speech Acoustics
4.
J Acoust Soc Am ; 150(6): 4103, 2021 12.
Article in English | MEDLINE | ID: mdl-34972309

ABSTRACT

Although unfamiliar accents can pose word identification challenges for children and adults, few studies have directly compared perception of multiple nonnative and regional accents or quantified how the extent of deviation from the ambient accent impacts word identification accuracy across development. To address these gaps, 5- to 7-year-old children's and adults' word identification accuracy with native (Midland American, British, Scottish), nonnative (German-, Mandarin-, Japanese-accented English) and bilingual (Hindi-English) varieties (one talker per accent) was tested in quiet and noise. Talkers' pronunciation distance from the ambient dialect was quantified at the phoneme level using a Levenshtein algorithm adaptation. Whereas performance was worse on all non-ambient dialects than the ambient one, there were only interactions between talker and age (child vs adult or across age for the children) for a subset of talkers, which did not fall along the native/nonnative divide. Levenshtein distances significantly predicted word recognition accuracy for adults and children in both listening environments with similar impacts in quiet. In noise, children had more difficulty overcoming pronunciations that substantially deviated from ambient dialect norms than adults. Future work should continue investigating how pronunciation distance impacts word recognition accuracy by incorporating distance metrics at other levels of analysis (e.g., phonetic, suprasegmental).
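The pronunciation-distance metric above is described as a Levenshtein algorithm adaptation applied at the phoneme level. As a baseline for that idea, the standard Levenshtein edit distance over phoneme sequences looks like this (the study's adaptation likely adds weighting or normalization not shown here, and the phoneme symbols in the example are illustrative):

```python
def levenshtein(a, b):
    """Standard edit distance between two phoneme sequences:
    the minimum number of insertions, deletions, and
    substitutions turning a into b. A simple stand-in for the
    adapted metric used to quantify a talker's distance from
    the ambient dialect."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        cur = [i]
        for j, pb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (pa != pb)))  # substitution
        prev = cur
    return prev[-1]

# Ambient "cat" [k ae t] vs an accented rendering [k a t]
print(levenshtein(["k", "ae", "t"], ["k", "a", "t"]))  # 1
```

Dividing the raw distance by the length of the longer sequence yields a length-normalized score, which makes distances comparable across words.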


Subjects
Speech Perception, Adult, Auditory Perception, Child, Preschool Child, Humans, Language, Noise, Phonetics